Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild

Conference on Robot Learning (CoRL) 2025


1National University of Singapore, 2University of Toronto,
3IIT-Dhanbad, 4Singapore Technologies Engineering

Visual search for bears in simulated Yosemite Valley (colored path = simulation path).

TL;DR

Search-TTA is a multimodal test-time adaptation framework that corrects poor VLM predictions caused by domain mismatch or limited training data. It supports various input modalities (e.g. image, text, sound) and planning methods (e.g. RL) to achieve efficient visual navigation and search in the wild.

AVS-Bench is a visual search dataset based on internet-scale ecological data that contains 380k training and 8k validation satellite images, each annotated with target locations and the corresponding ground-level image, taxonomic label, and (for a subset) sound data.

Abstract

To perform outdoor autonomous visual navigation and search, a robot may leverage satellite imagery as a prior map. This can help inform high-level search and exploration strategies, even when such images lack sufficient resolution to allow for visual recognition of targets. However, approaches which leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address this challenge, we introduce Search-TTA, a multimodal test-time adaptation framework with a flexible plug-and-play interface compatible with various input modalities (e.g. image, text, sound) and planning methods. First, we pretrain a satellite image encoder to align with CLIP's visual encoder to output probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP’s predictions during search using a test-time adaptation mechanism that performs uncertainty-weighted gradient updates online. To train and evaluate Search-TTA, we curate AVS-Bench, a visual search dataset based on internet-scale ecological data that contains 380k images. We find that Search-TTA improves planner performance by up to 30.0%, particularly in cases with poor initial CLIP predictions due to OOD scenarios and limited training data. It also performs comparably with significantly larger VLMs, and achieves zero-shot generalization to unseen modalities.

Test-Time Adaptation Feedback

In this work, we use CLIP as our lightweight VLM, and first align a satellite image encoder to the same representation space as CLIP's vision encoder through patch-level contrastive learning. This enables the satellite image encoder to generate a score map by taking the cosine similarity between its per-patch embeddings and the embeddings of other modalities (e.g., ground image, text, sound). We then introduce a novel test-time adaptation feedback mechanism to refine CLIP's predictions during search. To achieve this, we take inspiration from Spatial Poisson Point Processes to perform gradient updates to the satellite image encoder based on past measurements. We also enhance the loss function with an uncertainty-driven weighting scheme that acts as a regularizer to ensure stable gradient updates. A minimal sketch of these two steps is shown below.
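The sketch below (not the authors' code) illustrates the two ingredients described above under simplifying assumptions: a score map obtained from the cosine similarity between satellite patch embeddings and a query embedding, and one online TTA gradient step using a Poisson point-process style negative log-likelihood on visited patches, down-weighted by measurement uncertainty. Names such as SatPatchEncoder, tta_step, and the exact weighting are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of score-map generation and one uncertainty-weighted TTA step.
import torch
import torch.nn.functional as F

class SatPatchEncoder(torch.nn.Module):
    """Toy stand-in for the satellite image encoder (outputs per-patch embeddings)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        # 512x512 satellite tile -> 16x16 grid of patch embeddings
        self.proj = torch.nn.Conv2d(3, embed_dim, kernel_size=32, stride=32)

    def forward(self, sat_img):                      # sat_img: (B, 3, 512, 512)
        feats = self.proj(sat_img)                   # (B, D, 16, 16)
        return feats.flatten(2).transpose(1, 2)      # (B, 256, D)

def score_map(patch_emb, query_emb, temperature=0.07):
    """Cosine similarity between each satellite patch and the query embedding
    (ground image / text / sound), normalized into a probability map."""
    patch_emb = F.normalize(patch_emb, dim=-1)       # (B, P, D)
    query_emb = F.normalize(query_emb, dim=-1)       # (B, D)
    sim = torch.einsum("bpd,bd->bp", patch_emb, query_emb) / temperature
    return sim.softmax(dim=-1)                       # (B, P), sums to 1 over patches

def tta_step(encoder, optimizer, sat_img, query_emb, visited_idx, detections, uncertainty):
    """One online TTA update. `visited_idx` are patch indices already measured,
    `detections` are observed target counts there, and `uncertainty` in [0, 1]
    down-weights unreliable measurements to keep gradient updates stable."""
    probs = score_map(encoder(sat_img), query_emb)           # (1, P)
    lam = probs[0, visited_idx].clamp_min(1e-6)              # predicted intensity at visited patches
    # Poisson point-process style NLL on visited cells: lambda - y * log(lambda)
    nll = lam - detections * torch.log(lam)
    loss = ((1.0 - uncertainty) * nll).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    enc = SatPatchEncoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
    sat = torch.randn(1, 3, 512, 512)                 # placeholder satellite tile
    query = torch.randn(1, 512)                       # e.g. CLIP embedding of a ground image
    visited = torch.tensor([3, 17, 42])               # patches the robot has measured
    dets = torch.tensor([0.0, 1.0, 0.0])              # detections at those patches
    unc = torch.tensor([0.1, 0.2, 0.5])               # per-measurement uncertainty
    print(tta_step(enc, opt, sat, query, visited, dets, unc))
```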

Search-TTA Framework

TTA Example

AVS-Bench

To validate Search-TTA, we curate AVS-Bench, a visual search dataset based on internet-scale ecological data. It comprises Sentinel-2 level 2A satellite images with unseen taxonomic targets from the iNat-2021 dataset, each tagged with a ground-level image and taxonomic label (some with sound data). One advantage of using ecological data is the hierarchical structure of taxonomic labels (seven distinct tiers), which facilitates baseline evaluation across various levels of specificity, as illustrated below. AVS-Bench is diverse in geography and taxonomies to reflect in-the-wild scenarios. Our dataset offers 380k training images and 8k validation images (in- and out-of-domain). Aside from Search-TTA, we use AVS-Bench to finetune LISA to output score maps and text explanations (demo link at the top).
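The hypothetical sample record below illustrates how the seven-tier taxonomic hierarchy enables queries at different levels of specificity; the field names are assumptions for illustration, not the dataset's actual schema.

```python
# Illustrative (hypothetical) AVS-Bench-style sample; field names are assumptions.
sample = {
    "satellite_image": "sentinel2_l2a_tile_0001.tif",   # Sentinel-2 level 2A tile
    "target_locations": [(123, 87), (140, 92)],         # target positions on the tile
    "ground_image": "inat_obs_0001.jpg",                # ground-level photo of the target
    "sound": None,                                       # available only for a subset of samples
    "taxonomy": {                                        # seven-tier taxonomic label
        "kingdom": "Animalia",
        "phylum":  "Chordata",
        "class":   "Mammalia",
        "order":   "Carnivora",
        "family":  "Ursidae",
        "genus":   "Ursus",
        "species": "Ursus americanus",
    },
}

# Coarser or finer text queries (e.g. "Mammalia" vs. "Ursus americanus") can be
# formed by picking a tier, which is how specificity levels can be compared.
query_text = sample["taxonomy"]["species"]
```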

AVS-Bench covers a diverse set of taxonomies across the world.

Results

Performance Analysis

TTA Performance Gain with Pretraining Data

VLM Inference Time Analysis



Multimodality

Emergent Alignment to Unseen Modalities




Combating Hallucination

TTA Performance Gain with Extent of Hallucination

TTA Comparisons

We validate Search-TTA on a wide variety of taxonomies. Play these videos to see the effectiveness of TTA.

With Real UAV Dynamics

Comparison without TTA

Comparison with LISA

🤗 HuggingFace Demo


BibTeX

    @inproceedings{tan2025searchtta,
      title        = {Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild},
      author       = {Derek Ming Siang Tan and Shailesh and Boyang Liu and Alok Raj and Qi Xuan Ang and Weiheng Dai and Tanishq Duhan and Jimmy Chiun and Yuhong Cao and Florian Shkurti and Guillaume Sartoretti},
      booktitle    = {Conference on Robot Learning},
      year         = {2025},
      organization = {PMLR}
    }

This work was completed during Derek's visiting researcher stint at the University of Toronto. Shailesh and Alok contributed to this work as interns at the National University of Singapore.